

Context-aware adaptive personalised recommendation: a meta-hybrid

Tibensky, Peter, Kompan, Michal

arXiv.org Artificial Intelligence

Recommender systems are deployed across a wide range of e-commerce systems, reducing the problem of information overload. The most common approach is to choose a single recommender that the system uses to make predictions. However, users vary from each other; thus, a one-size-fits-all approach seems sub-optimal. In this paper, we propose a meta-hybrid recommender that uses machine learning to predict an optimal algorithm, so that the best-performing recommender is used for each specific session and user. This selection depends on contextual and preferential information collected about the user. We use the standard MovieLens and The Movie DB datasets for offline evaluation. We show that, based on the proposed model, it is possible to predict which recommender will provide the most precise recommendations to a user. The theoretical performance of our meta-hybrid outperforms the separate approaches by 20-50% in normalized Discounted Cumulative Gain (nDCG) and Root Mean Square Error (RMSE) metrics. However, it is hard to obtain this optimal performance based on the standard information widely stored about users.
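The abstract does not specify the selection mechanism, but the idea of learning to route each user to an algorithm can be sketched in a few lines. Everything below is an illustrative assumption: the context features (session length, rating count), the two candidate algorithms, and the 1-nearest-neighbour selector standing in for the paper's actual meta-model.

```python
# Toy meta-hybrid selector: route each user to the recommender that
# performed best for the most similar user in an offline evaluation.
# Features, labels, and the 1-NN model are illustrative assumptions.

training = [
    # (avg_session_minutes, n_ratings) -> best algorithm in offline eval
    ((5.0, 10),   "content_based"),
    ((6.0, 15),   "content_based"),
    ((40.0, 300), "collaborative"),
    ((35.0, 250), "collaborative"),
]

def select_recommender(context):
    """Return the algorithm whose nearest training user matches this context."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best = min((sq_dist(context, feats), algo) for feats, algo in training)
    return best

print(select_recommender((4.0, 8)))     # a sparse, short-session user
print(select_recommender((50.0, 400)))  # a heavy rater
```

In practice the selector would be a trained classifier over richer contextual features, but the routing structure — per-user model choice instead of one global recommender — is the same.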


ARTAI: An Evaluation Platform to Assess Societal Risk of Recommender Algorithms

Ruan, Qin, Xu, Jin, Dong, Ruihai, Younus, Arjumand, Mai, Tai Tan, O'Sullivan, Barry, Leavy, Susan

arXiv.org Artificial Intelligence

Societal risk emanating from how recommender algorithms disseminate content online is now well documented. Emergent regulation aims to mitigate this risk through ethical audits and enabling new research on the social impact of algorithms. However, there is currently a need for tools and methods that enable such evaluation. This paper presents ARTAI, an evaluation environment that enables large-scale assessments of recommender algorithms to identify harmful patterns in how content is distributed online and enables the implementation of new regulatory requirements for increased transparency in recommender systems.


Cross-Dataset Propensity Estimation for Debiasing Recommender Systems

Li, Fengyu, Dean, Sarah

arXiv.org Artificial Intelligence

Datasets for training recommender systems are often subject to distribution shift induced by users' and recommenders' selection biases. In this paper, we study the impact of selection bias on datasets with different quantization. We then leverage two differently quantized datasets from different source distributions to mitigate distribution shift by applying the inverse probability scoring method from causal inference.
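The inverse probability scoring estimator at the heart of that approach is compact. A minimal sketch with invented numbers: a four-item catalogue whose true mean rating is 3.0, but where highly rated items are much more likely to end up in the logged data.

```python
# Inverse probability (propensity) scoring on a selection-biased log.
# All numbers are illustrative. Observed log: (rating, observation propensity).
log = [(5, 1.0), (5, 1.0), (1, 0.25)]  # one of the two 1-star items went unlogged
n_items = 4                            # catalogue size; true mean rating is 3.0

naive = sum(r for r, _ in log) / len(log)   # biased: ignores how items were logged
ips = sum(r / p for r, p in log) / n_items  # reweight each event by 1/propensity

print(naive)  # ~3.67, pulled up by the over-represented 5-star items
print(ips)    # 3.5, closer to the true mean of 3.0
```

Dividing each observed rating by its propensity makes rarely observed events count for more, which is what corrects the distribution shift between the logged data and the full catalogue.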


Hidden Author Bias in Book Recommendation

Daniil, Savvina, Cuper, Mirjam, Liem, Cynthia C. S., van Ossenbruggen, Jacco, Hollink, Laura

arXiv.org Artificial Intelligence

Collaborative filtering algorithms have the advantage of not requiring sensitive user or item information to provide recommendations. However, they still suffer from fairness-related issues, like popularity bias. In this work, we argue that popularity bias often leads to other biases that are not obvious when additional user or item information is not available to the researcher. We examine our hypothesis in the book recommendation case on a commonly used dataset with book ratings, which we enrich with author information from publicly available external sources. We find that popular books in the dataset are mainly written by US citizens, and that these books tend to be recommended disproportionately by popular collaborative filtering algorithms compared to the users' profiles. We conclude that the societal implications of popularity bias should be further examined by the scholarly community.


On Sampled Metrics for Item Recommendation

Communications of the ACM

Recommender systems personalize content by recommending items to users. Item recommendation algorithms are evaluated by metrics that compare the positions of truly relevant items among the recommended items. To speed up the computation of metrics, recent work often uses sampled metrics, where only a smaller set of random items and the relevant items are ranked. This paper investigates such sampled metrics in more detail and shows that they are inconsistent with their exact counterparts, in the sense that they do not preserve relative statements, for example, "recommender A is better than B," not even in expectation. Moreover, the smaller the sample size, the less difference there is between metrics, and for very small sample sizes, all metrics collapse to the AUC metric. We show that it is possible to improve the quality of the sampled metrics by applying a correction, obtained by minimizing different criteria. We conclude with an empirical evaluation of the naive sampled metrics and their corrected variants. To summarize, our work suggests that sampling should be avoided for metric calculation; however, if an experimental study needs to sample, the proposed corrections can improve the quality of the estimate. Recommender systems are a key technology in online platforms for personalizing the selection of items that are shown to a user. Examples include recommending which products to buy, which videos to watch, or which songs to play. Recommendations are typically user-dependent and often context-dependent. A key operation of recommender systems is to retrieve a ranked list of the best items for a user in a particular context.
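The core failure mode is easy to reproduce. In this sketch (illustrative numbers, not taken from the paper), a relevant item whose true rank is 100 out of 1,000 has an exact Recall@1 of 0, yet when ranked against only 9 uniformly sampled negatives it wins roughly 39% of the time.

```python
import random

n_items, true_rank, n_neg = 1000, 100, 9  # relevant item truly ranks 100th
exact_recall_at_1 = 1.0 if true_rank == 1 else 0.0  # exact metric: 0.0

rng = random.Random(0)
trials = 2000
# Each sampled negative's rank is uniform over the other 999 items; it beats
# the relevant item only if that rank is below 100 (probability 99/999).
hits = sum(
    1 for _ in range(trials)
    if all(rng.randrange(1, n_items) >= true_rank for _ in range(n_neg))
)
sampled_recall_at_1 = hits / trials  # roughly 0.39: a wildly inflated estimate
```

The distortion is even starker at a top-10 cutoff: with only 9 sampled negatives, the relevant item is always inside the sampled top 10, so sampled Recall@10 is 1.0 no matter how badly the recommender actually ranks it.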


Using Reviews to Create a Recommender System That Works

#artificialintelligence

If you have ever bought a product online and marveled at the inanity and non-applicability of the 'related items' that haunt the buying and after-sales process, you already understand that popular and mainstream recommender systems tend to fall short in terms of understanding the relationships between prospective purchases. If you buy an unlikely and infrequent item, such as an oven, recommendations for other ovens are likely to be superfluous, though the worst recommender systems fail to acknowledge this. In the 2000s, for example, TiVo's recommender system created an early controversy in this sector by reassigning the perceived sexuality of a user, who subsequently sought to 're-masculinize' his user profile by selecting war movies – a crude approach to algorithm revision. Worse yet, you don't need to actually buy anything at (for instance) Amazon, or actually begin watching a movie whose description you're browsing at any major streaming platform, in order for information-starved recommender algorithms to start merrily down the wrong path; searches, dwells, and clicks into the 'details' pages are enough, and this scant (and probably incorrect) information is likely to be perpetuated across future browsing sessions at the platform. Sometimes it's possible to intervene: Netflix provides a 'thumbs up/down' system which should in theory help its machine learning algorithms remove certain embedded concepts and words from your recommendations profile (though its efficacy has been questioned, and it remains much easier to evolve a personalized recommender algorithm from scratch than it is to remove undesired ontologies), while Amazon lets you remove titles from your customer history, which should downgrade any unwelcome domains that have infiltrated your recommendations.


9 Must-Have Datasets for Investigating Recommender Systems

@machinelearnbot

Bio: Alexander Gude is currently a data scientist at Lab41 working on investigating recommender system algorithms. He holds a BA in physics from University of California, Berkeley, and a PhD in Elementary Particle Physics from University of Minnesota-Twin Cities. About: Lab41 is a "challenge lab" where the U.S. Intelligence Community comes together with their counterparts in academia, industry, and In-Q-Tel to tackle big data. It allows participants from diverse backgrounds to gain access to ideas, talent, and technology to explore what works and what doesn't in data analytics.


Recommended For You: How machine learning helps you choose what to consume next - Science in the News

#artificialintelligence

Ever wonder how music-streaming services such as Spotify and Pandora find songs that you like? Or how Facebook and Google find stories that are interesting to you? Many technology companies use machine learning algorithms to give personalized product suggestions; these algorithms can be found everywhere on the internet. One such algorithm may have even led you to the Science in the News article that you are now reading. Essentially, an algorithm is a set of instructions detailing how to complete a certain task.


Xavier Amatriain's answer to How do I combine more than one recommender algorithms? - Quora

#artificialintelligence

I won't go into the details of how to do this, but in a few words, all you need to do is train your independent models and then use their predictions as features of another ML model that puts them together. This ensemble layer can be as simple as a logistic regression or as complex as a deep neural network, but given your question I would definitely encourage you to start with the simplest model possible.
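As a concrete sketch of that stacking idea — everything here is invented for illustration: the two base scorers, the toy click labels, and a hand-rolled gradient-descent logistic regression standing in for a real library:

```python
import math

# Two stand-in base recommenders, each mapping (user, item) to a score.
def cf_score(user, item):       # "collaborative filtering" stand-in
    return 0.8 if (user + item) % 2 == 0 else 0.2

def content_score(user, item):  # "content-based" stand-in
    return 0.7 if item % 3 == 0 else 0.3

def features(u, i):
    # Base-model predictions become the ensemble layer's input features.
    return [cf_score(u, i), content_score(u, i)]

# Toy training triples: (user, item, clicked).
train = [(u, i, 1 if (u + i) % 2 == 0 else 0) for u in range(10) for i in range(10)]

# Ensemble layer: logistic regression fit by plain gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(100):
    for u, i, y in train:
        x = features(u, i)
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        g = p - y                  # gradient of the log loss
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

def blend(u, i):
    """Final score: the ensemble's learned combination of the base predictions."""
    x = features(u, i)
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
```

The structure scales directly: swap the hand-rolled loop for scikit-learn's `LogisticRegression`, or make `blend` a small neural network, without touching the base models at all.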

